Matrix Operations

import numpy as np
import sympy as sy

sy.init_printing()
np.set_printoptions(precision=3)
np.set_printoptions(suppress=True)
from IPython.core.interactiveshell import InteractiveShell

InteractiveShell.ast_node_interactivity = "all"  # display multiple lines
def round_expr(expr, num_digits):
    # round every numeric atom in a SymPy expression, for cleaner display
    return expr.xreplace({n: round(n, num_digits) for n in expr.atoms(sy.Number)})

Matrix addition operations are straightforward:

  1. \(A + B = B + A\)
  2. \((A + B) + C = A + (B + C)\)
  3. \(c(A + B) = cA + cB\)
  4. \((c + d)A = cA + dA\)
  5. \(c(dA) = (cd)A\)
  6. \(A + \mathbf{0} = A\), where \(\mathbf{0}\) is the zero matrix
  7. For any \(A\), there exists \(-A\) such that \(A + (-A) = \mathbf{0}\).

They are as obvious as they look, so no proofs are provided (a quick numerical spot-check appears after the next example). The matrix multiplication properties are:

  1. \(A(BC) = (AB)C\)
  2. \(c(AB) = (cA)B = A(cB)\)
  3. \(A(B + C) = AB + AC\)
  4. \((B + C)A = BA + CA\)

Note that we need to differentiate between two types of multiplication: Hadamard multiplication, denoted as \(A \odot B\) (element-wise multiplication), and matrix multiplication, denoted simply as \(AB\).

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
A * B  # this is Hadamard elementwise product
array([[ 5, 12],
       [21, 32]])
A @ B  # this is matrix product
array([[19, 22],
       [43, 50]])
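With both products in hand, we can spot-check a few of the addition and multiplication properties listed above. This is a minimal sketch with random integer matrices (integer entries make the equalities exact); the names A1, B1, C1 are illustrative:

rng = np.random.default_rng(0)
A1, B1, C1 = rng.integers(0, 10, (3, 3, 3))  # three random 3x3 integer matrices

np.array_equal(A1 + B1, B1 + A1)                   # A + B = B + A
np.array_equal(3 * (A1 + B1), 3 * A1 + 3 * B1)     # c(A + B) = cA + cB
np.array_equal(A1 @ (B1 @ C1), (A1 @ B1) @ C1)     # A(BC) = (AB)C
np.array_equal(A1 @ (B1 + C1), A1 @ B1 + A1 @ C1)  # A(B + C) = AB + AC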

Let’s show explicitly the matrix multiplication rule for each element:

np.sum(A[0, :] * B[:, 0])  # element at (1, 1)
np.sum(A[1, :] * B[:, 0])  # element at (2, 1)
np.sum(A[0, :] * B[:, 1])  # element at (1, 2)
np.sum(A[1, :] * B[:, 1])  # element at (2, 2)
19
43
22
50
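In general, the \((i,j)\) entry of \(AB\) is \(\sum_k a_{ik}b_{kj}\). Below is a small sketch of this rule written as an explicit double loop, for illustration only; in practice, use the @ operator:

def matmul_loops(X, Y):
    # matrix product via the rule c_ij = sum_k x_ik * y_kj
    m, n = X.shape
    n2, p = Y.shape
    assert n == n2, "inner dimensions must match"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = np.sum(X[i, :] * Y[:, j])
    return C

matmul_loops(A, B)  # same entries as A @ B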

SymPy Demonstration: Addition

Let’s define all the letters as symbols in case we need to use them repeatedly. With this library, we can perform computations analytically, making it a valuable tool for learning linear algebra.

a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z = (
    sy.symbols(
        "a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z",
        real=True,
    )
)
A = sy.Matrix([[a, b, c], [d, e, f]])
A + A
A - A

\(\displaystyle \left[\begin{matrix}2 a & 2 b & 2 c\\2 d & 2 e & 2 f\end{matrix}\right]\)

\(\displaystyle \left[\begin{matrix}0 & 0 & 0\\0 & 0 & 0\end{matrix}\right]\)

B = sy.Matrix([[g, h, i], [j, k, l]])
A + B
A - B

\(\displaystyle \left[\begin{matrix}a + g & b + h & c + i\\d + j & e + k & f + l\end{matrix}\right]\)

\(\displaystyle \left[\begin{matrix}a - g & b - h & c - i\\d - j & e - k & f - l\end{matrix}\right]\)

SymPy Demonstration: Multiplication

The matrix multiplication rules can be clearly understood by using symbols.

A = sy.Matrix([[a, b, c], [d, e, f]])
B = sy.Matrix([[g, h, i], [j, k, l], [m, n, o]])
A
B

\(\displaystyle \left[\begin{matrix}a & b & c\\d & e & f\end{matrix}\right]\)

\(\displaystyle \left[\begin{matrix}g & h & i\\j & k & l\\m & n & o\end{matrix}\right]\)

AB = A * B
AB

\(\displaystyle \left[\begin{matrix}a g + b j + c m & a h + b k + c n & a i + b l + c o\\d g + e j + f m & d h + e k + f n & d i + e l + f o\end{matrix}\right]\)

Commutativity

Matrix multiplication usually does not commute, meaning \({AB} \neq {BA}\). For instance, consider matrices \(A\) and \(B\):

A = sy.Matrix([[3, 4], [7, 8]])
B = sy.Matrix([[5, 3], [2, 1]])
A * B
B * A

\(\displaystyle \left[\begin{matrix}23 & 13\\51 & 29\end{matrix}\right]\)

\(\displaystyle \left[\begin{matrix}36 & 44\\13 & 16\end{matrix}\right]\)

How do we find a matrix that commutes with a given \(A\)? Let's find out analytically.

A = sy.Matrix([[a, b], [c, d]])
B = sy.Matrix([[e, f], [g, h]])
A * B
B * A

\(\displaystyle \left[\begin{matrix}a e + b g & a f + b h\\c e + d g & c f + d h\end{matrix}\right]\)

\(\displaystyle \left[\begin{matrix}a e + c f & b e + d f\\a g + c h & b g + d h\end{matrix}\right]\)

To have \(AB = BA\), we need \(AB - BA = 0\):

M = A * B - B * A
M

\(\displaystyle \left[\begin{matrix}b g - c f & a f - b e + b h - d f\\- a g + c e - c h + d g & - b g + c f\end{matrix}\right]\)

That gives us a system of linear equations \[ \begin{align} b g - c f&=0 \\ a f - b e + b h - d f&=0\\ - a g + c e - c h + d g&=0 \\ - b g + c f&=0 \end{align} \]

To solve it, we can use the Gauss-Jordan elimination method. If we treat \(a, b, c, d\) as coefficients of the system, we can extract an augmented matrix. \[ \begin{align*} b g-c f=0 \quad &\Rightarrow-c f+b g+0 e+0 h=0\\ a f-b e+b h-d f=0 \quad &\Rightarrow(a-d) f+0 g-b e+b h=0\\ -a g+c e-c h+d g=0 \quad &\Rightarrow 0 f+(d-a) g+c e-c h=0\\ -b g+c f=0 \quad &\Rightarrow c f-b g+0 e+0 h=0 \end{align*} \]

So the augmented matrix takes the form \[ \begin{equation} \left[\begin{array}{cccc:c} -c & b & 0 & 0 & 0 \\ a-d & 0 & -b & b & 0 \\ 0 & d-a & c & -c & 0 \\ c & -b & 0 & 0 & 0 \end{array}\right] \end{equation} \]

A_aug = sy.Matrix([[-c, b, 0, 0], [a - d, 0, -b, b], [0, d - a, c, -c], [c, -b, 0, 0]])
A_aug

\(\displaystyle \left[\begin{matrix}- c & b & 0 & 0\\a - d & 0 & - b & b\\0 & - a + d & c & - c\\c & - b & 0 & 0\end{matrix}\right]\)

Perform Gauss-Jordan elimination until the matrix is in reduced row echelon form.

A_aug.rref()

\(\displaystyle \left( \left[\begin{matrix}1 & 0 & - \frac{b}{a - d} & \frac{b}{a - d}\\0 & 1 & \frac{b c}{- a b + b d} & \frac{c}{a - d}\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\end{matrix}\right], \ \left( 0, \ 1\right)\right)\)

The general solution is \[ \begin{equation} \left(\begin{array}{l} e \\ f \\ g \\ h \end{array}\right)=c_1\left(\begin{array}{c} \frac{b}{a-d} \\ \frac{b c}{a b-b d} \\ 1 \\ 0 \end{array}\right)+c_2\left(\begin{array}{c} -\frac{b}{a-d} \\ -\frac{c}{a-d} \\ 0 \\ 1 \end{array}\right) \end{equation} \]

import sympy as sp

# Define symbolic entries for A (2x2 matrix)
a, b, c, d = sp.symbols("a b c d")

# Define symbolic entries for B (2x2 matrix)
e, f, g, h = sp.symbols("e f g h")

# Define matrices A and B
A = sp.Matrix([[a, b], [c, d]])
B = sp.Matrix([[e, f], [g, h]])

# Compute AB and BA
AB = A * B
BA = B * A

# Set up equations AB = BA
equations = [sp.Eq(AB[i, j], BA[i, j]) for i in range(2) for j in range(2)]

# Solve the system of equations
solution = sp.solve(equations, (e, f, g, h))

# Print the general solutions
print("Solution for B elements:")
for sol in solution:
    print(f"{sol}: {solution[sol]}")

# To express the solution in parameterized form, substitute
# parameters c1, c2 for the free variables g, h
# Define parameters c1 and c2
c1, c2 = sp.symbols("c1 c2")

# Define the parameterized solution
parametric_solution = {
    e: solution[e].subs({g: c1, h: c2}),
    f: solution[f].subs({g: c1, h: c2}),
    g: c1,
    h: c2,
}

# Print the parameterized solution in LaTeX format
print("\nParameterized solution in LaTeX format:")
for sol in parametric_solution:
    print(f"{sp.latex(sol)}: {sp.latex(parametric_solution[sol])}")
Solution for B elements:
e: h + g*(a - d)/c
f: b*g/c

Parameterized solution in LaTeX format:
e: c_{2} + \frac{c_{1} \left(a - d\right)}{c}
f: \frac{b c_{1}}{c}
g: c_{1}
h: c_{2}

\[ e: c_{2} + \frac{c_{1} \left(a - d\right)}{c}\\ f: \frac{b c_{1}}{c}\\ g: c_{1}\\ h: c_{2}\\ \]
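As a sanity check (a short sketch), we can substitute the parameterized entries back into \(B\) and confirm that \(AB - BA\) reduces to the zero matrix:

B_param = B.subs(parametric_solution)  # B with e, f, g, h replaced by the parameterized solution
(A * B_param - B_param * A).expand()   # expands to the 2x2 zero matrix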

Transpose of Matrices

A matrix \(A_{n\times m}\) and its transpose \(A^T_{m\times n}\) are shown below:

A = sy.Matrix([[1, 2, 3], [4, 5, 6]])
A
A.transpose()

\(\displaystyle \left[\begin{matrix}1 & 2 & 3\\4 & 5 & 6\end{matrix}\right]\)

\(\displaystyle \left[\begin{matrix}1 & 4\\2 & 5\\3 & 6\end{matrix}\right]\)

The properties of the transpose are:

  1. \((A^T)^T = A\)
  2. \((A + B)^T = A^T + B^T\)
  3. \((cA)^T = cA^T\)
  4. \((AB)^T = B^TA^T\)
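The first three properties are easy to spot-check numerically. A quick sketch with random integer matrices; the names M1 and N1 are illustrative:

rng = np.random.default_rng(1)
M1, N1 = rng.integers(0, 10, (2, 3, 4))  # two random 3x4 integer matrices

np.array_equal(M1.T.T, M1)                # (A^T)^T = A
np.array_equal((M1 + N1).T, M1.T + N1.T)  # (A + B)^T = A^T + B^T
np.array_equal((3 * M1).T, 3 * M1.T)      # (cA)^T = cA^T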

We can show why the last property holds with SymPy: define \(A\) and \(B\), multiply them, then transpose the product, i.e., \((AB)^T\).

A = sy.Matrix([[a, b], [c, d], [e, f]])
B = sy.Matrix([[g, h, i], [j, k, l]])
AB = A * B
AB_t = AB.transpose()
AB_t

\(\displaystyle \left[\begin{matrix}a g + b j & c g + d j & e g + f j\\a h + b k & c h + d k & e h + f k\\a i + b l & c i + d l & e i + f l\end{matrix}\right]\)

Now transpose \(A\) and \(B\) first, then multiply; that is, \(B^TA^T\).

B_t_A_t = B.transpose() * A.transpose()
B_t_A_t

\(\displaystyle \left[\begin{matrix}a g + b j & c g + d j & e g + f j\\a h + b k & c h + d k & e h + f k\\a i + b l & c i + d l & e i + f l\end{matrix}\right]\)

Check if they are equal

AB_t == B_t_A_t
True

Identity Matrices

This is the identity matrix \(I_5\): \(1\)'s on the principal diagonal and \(0\)'s everywhere else.

sy.eye(5)

\(\displaystyle \left[\begin{matrix}1 & 0 & 0 & 0 & 0\\0 & 1 & 0 & 0 & 0\\0 & 0 & 1 & 0 & 0\\0 & 0 & 0 & 1 & 0\\0 & 0 & 0 & 0 & 1\end{matrix}\right]\)

Identity matrix properties:

\[ AI=IA = A \]

Let's generate \(I\) and \(A\) and verify that it holds.

I = np.eye(5)
I
array([[1., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0.],
       [0., 0., 1., 0., 0.],
       [0., 0., 0., 1., 0.],
       [0., 0., 0., 0., 1.]])
A = np.around(np.random.rand(5, 5) * 100)
A  # generate a random matrix
array([[ 75.,  90.,  64., 100.,  23.],
       [ 14.,  10.,  48.,   2.,  89.],
       [ 43.,  68.,  77.,  28.,  74.],
       [ 26.,  22.,  43.,  13.,  25.],
       [ 62.,  16.,  40.,  17.,   6.]])

Check if they are equal

(A @ I == I @ A).all()
True

Elementary Matrix

An elementary matrix is a matrix obtained by performing a single elementary row operation on an identity matrix. For example:

\[ \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] \begin{array}{c} R_1 \leftrightarrow R_2 \\ ~ \\ ~ \end{array} \qquad \Longrightarrow \qquad \left[ \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right] \]

Here \(R_1 \leftrightarrow R_2\) means exchanging row 1 and row 2; we denote the transformed matrix as \(E\). Left-multiplying \(E\) onto a matrix \(A\) then performs exactly the same row operation on \(A\).

First, generate matrix \(A\).

A = sy.randMatrix(3, percent=80)
A  # generate a random matrix with 80% of entries being nonzero

\(\displaystyle \left[\begin{matrix}0 & 42 & 21\\10 & 93 & 48\\15 & 0 & 33\end{matrix}\right]\)

Create an elementary matrix with \(R_1\leftrightarrow R_2\)

E = sy.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]])
E

\(\displaystyle \left[\begin{matrix}0 & 1 & 0\\1 & 0 & 0\\0 & 0 & 1\end{matrix}\right]\)

Notice that by left-multiplying \(E\) onto \(A\), matrix \(A\) also switches rows 1 and 2.

E * A

\(\displaystyle \left[\begin{matrix}10 & 93 & 48\\0 & 42 & 21\\15 & 0 & 33\end{matrix}\right]\)

Adding a multiple of a row onto another row in the identity matrix also gives us an elementary matrix.

\[ \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] \begin{array}{c} ~ \\ ~ \\ R_3-7R_1 \end{array} \qquad \Longrightarrow \qquad \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -7 & 0 & 1 \end{array} \right] \]

Let’s verify with SymPy.

A = sy.randMatrix(3, percent=80)
A
E = sy.Matrix([[1, 0, 0], [0, 1, 0], [-7, 0, 1]])
E

\(\displaystyle \left[\begin{matrix}0 & 0 & 63\\80 & 60 & 56\\16 & 2 & 7\end{matrix}\right]\)

\(\displaystyle \left[\begin{matrix}1 & 0 & 0\\0 & 1 & 0\\-7 & 0 & 1\end{matrix}\right]\)

We can see that \(R_3-7R_1\) takes place on \(A\):

E * A

\(\displaystyle \left[\begin{matrix}0 & 0 & 63\\80 & 60 & 56\\16 & 2 & -434\end{matrix}\right]\)

We can also reproduce this by an explicit row operation on \(A\).

EA = A.copy()  # work on a copy so the original A is unchanged
EA[2, :] = -7 * EA[0, :] + EA[2, :]
EA

\(\displaystyle \left[\begin{matrix}0 & 0 & 63\\80 & 60 & 56\\16 & 2 & -434\end{matrix}\right]\)

In the next section, we will refresh an important conclusion: an invertible matrix is a product of a series of elementary matrices.
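As a quick preview (a hand-picked \(2\times 2\) sketch, not the general algorithm), we can reduce a matrix to \(I\) with a sequence of elementary matrices, then recover it as the product of their inverses:

A2 = sy.Matrix([[1, 2], [3, 4]])
E1 = sy.Matrix([[1, 0], [-3, 1]])                  # R2 - 3R1
E2 = sy.Matrix([[1, 0], [0, -sy.Rational(1, 2)]])  # R2 * (-1/2)
E3 = sy.Matrix([[1, -2], [0, 1]])                  # R1 - 2R2
E3 * E2 * E1 * A2               # reduces A2 to the identity
E1.inv() * E2.inv() * E3.inv()  # recovers A2 as a product of elementary inverses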

Inverse Matrices

If \(AB = BA = \mathbf{I}\), then \(B\) is called the inverse of matrix \(A\), denoted \(B = A^{-1}\).

NumPy has a convenient function, np.linalg.inv(), for computing inverse matrices. Generate a random \(A\):

A = np.round(10 * np.random.randn(5, 5))
A
array([[  7.,  -6.,  -2.,   3.,  14.],
       [ -5.,   3.,   8., -20.,   7.],
       [ -2.,  -8.,  -2.,   2., -16.],
       [  2., -14.,  -5.,  -2., -17.],
       [  6.,  -8.,   4.,  13.,   8.]])
Ainv = np.linalg.inv(A)
Ainv
array([[-0.266, -0.05 , -0.569,  0.356,  0.127],
       [-0.127, -0.039, -0.213,  0.088,  0.016],
       [-0.169,  0.035, -0.173,  0.101,  0.135],
       [ 0.022, -0.023,  0.093, -0.074,  0.011],
       [ 0.121,  0.018,  0.149, -0.11 , -0.04 ]])

Verify that they are truly inverses of each other:

A @ Ainv
array([[ 1.,  0., -0., -0., -0.],
       [ 0.,  1.,  0., -0., -0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0., -0.,  0.,  1.,  0.],
       [ 0., -0., -0.,  0.,  1.]])

The -0. entries are tiny negative floats; the digits after the decimal point are suppressed by our print settings.
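Rather than inspecting the printed matrix, we can confirm numerically that \(AA^{-1}\) equals \(\mathbf{I}\) within floating-point tolerance:

np.allclose(A @ Ainv, np.eye(5))  # True: A @ Ainv is I up to floating-point error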

Gauss-Jordan Elimination Method for Matrix Inversion

A convenient way to calculate the inverse of a matrix is to construct an augmented matrix \([A \,|\, \mathbf{I}]\). Then, multiply a series of elementary matrices \(E\) (representing elementary row operations) until matrix \(A\) is in row-reduced form. If \(A\) is of full rank, this process will transform \(A\) into an identity matrix \(\mathbf{I}\). Consequently, the identity matrix on the right-hand side of the augmented matrix will be converted into \(A^{-1}\) automatically.

We can demonstrate this using SymPy’s .rref() function on the augmented matrix \([A \,|\, \mathbf{I}]\).

AI = np.hstack((A, I))  # stack the matrix A and I horizontally
AI = sy.Matrix(AI)
AI

\(\displaystyle \left[\begin{matrix}7.0 & -6.0 & -2.0 & 3.0 & 14.0 & 1.0 & 0.0 & 0.0 & 0.0 & 0.0\\-5.0 & 3.0 & 8.0 & -20.0 & 7.0 & 0.0 & 1.0 & 0.0 & 0.0 & 0.0\\-2.0 & -8.0 & -2.0 & 2.0 & -16.0 & 0.0 & 0.0 & 1.0 & 0.0 & 0.0\\2.0 & -14.0 & -5.0 & -2.0 & -17.0 & 0.0 & 0.0 & 0.0 & 1.0 & 0.0\\6.0 & -8.0 & 4.0 & 13.0 & 8.0 & 0.0 & 0.0 & 0.0 & 0.0 & 1.0\end{matrix}\right]\)

AI_rref = AI.rref()
AI_rref

\(\displaystyle \left( \left[\begin{matrix}1 & 0 & 0 & 0 & 0 & -0.265934065934066 & -0.0498285714285714 & -0.56916043956044 & 0.355938461538462 & 0.127032967032967\\0 & 1 & 0 & 0 & 0 & -0.127472527472527 & -0.0386285714285714 & -0.213283516483517 & 0.0875076923076923 & 0.0162637362637363\\0 & 0 & 1 & 0 & 0 & -0.169230769230769 & 0.0352 & -0.172738461538462 & 0.101415384615385 & 0.135384615384615\\0 & 0 & 0 & 1 & 0 & 0.021978021978022 & -0.0228571428571429 & 0.0931868131868132 & -0.0738461538461538 & 0.010989010989011\\0 & 0 & 0 & 0 & 1 & 0.120879120879121 & 0.0182857142857143 & 0.148527472527473 & -0.110153846153846 & -0.0395604395604396\end{matrix}\right], \ \left( 0, \ 1, \ 2, \ 3, \ 4\right)\right)\)

Extract the RHS block; this is \(A^{-1}\).

Ainv = AI_rref[0][:, 5:]
Ainv  # extract the RHS block

\(\displaystyle \left[\begin{matrix}-0.265934065934066 & -0.0498285714285714 & -0.56916043956044 & 0.355938461538462 & 0.127032967032967\\-0.127472527472527 & -0.0386285714285714 & -0.213283516483517 & 0.0875076923076923 & 0.0162637362637363\\-0.169230769230769 & 0.0352 & -0.172738461538462 & 0.101415384615385 & 0.135384615384615\\0.021978021978022 & -0.0228571428571429 & 0.0931868131868132 & -0.0738461538461538 & 0.010989010989011\\0.120879120879121 & 0.0182857142857143 & 0.148527472527473 & -0.110153846153846 & -0.0395604395604396\end{matrix}\right]\)

The round_expr function defined at the top of this file rounds floats to \(4\) digits, purely for readability.

round_expr(Ainv, 4)

\(\displaystyle \left[\begin{matrix}-0.2659 & -0.0498 & -0.5692 & 0.3559 & 0.127\\-0.1275 & -0.0386 & -0.2133 & 0.0875 & 0.0163\\-0.1692 & 0.0352 & -0.1727 & 0.1014 & 0.1354\\0.022 & -0.0229 & 0.0932 & -0.0738 & 0.011\\0.1209 & 0.0183 & 0.1485 & -0.1102 & -0.0396\end{matrix}\right]\)

We can verify if \(AA^{-1}=\mathbf{I}\)

A = sy.Matrix(A)
round_expr(A * Ainv, 4)

\(\displaystyle \left[\begin{matrix}1.0 & 0 & 0 & 0.0 & 0\\0.0 & 1.0 & 0.0 & 0.0 & 0.0\\0.0 & 0 & 1.0 & 0 & 0\\0.0 & 0.0 & 0.0 & 1.0 & 0.0\\0 & 0.0 & 0.0 & 0 & 1.0\end{matrix}\right]\)

We got \(\mathbf{I}\), which means the RHS block is indeed \(A^{-1}\).

An Example of Existence of Inverse

Determine the values of \(\lambda\) such that the matrix \[A=\left[ \begin{matrix}3 &\lambda &1\cr 2 & -1 & 6\cr 1 & 9 & 4\end{matrix}\right]\] is not invertible.

Again, we use SymPy to solve the problem.

lamb = sy.symbols("lamda")  # SymPy will automatically render into LaTeX greek letters
A = np.array([[3, lamb, 1], [2, -1, 6], [1, 9, 4]])
I = np.eye(3)
A
array([[3, lamda, 1],
       [2, -1, 6],
       [1, 9, 4]], dtype=object)

Form the augmented matrix.

AI = np.hstack((A, I))
AI = sy.Matrix(AI)
AI

\(\displaystyle \left[\begin{matrix}3 & \lambda & 1 & 1.0 & 0.0 & 0.0\\2 & -1 & 6 & 0.0 & 1.0 & 0.0\\1 & 9 & 4 & 0.0 & 0.0 & 1.0\end{matrix}\right]\)

AI_rref = AI.rref()
AI_rref

\(\displaystyle \left( \left[\begin{matrix}1 & 0 & 0 & \frac{116.0}{4.0 \lambda + 310.0} & \frac{8.0 \lambda - 18.0}{4.0 \lambda + 310.0} & \frac{- 12.0 \lambda - 2.0}{4.0 \lambda + 310.0}\\0 & 1 & 0 & \frac{4.0}{4.0 \lambda + 310.0} & - \frac{22.0}{4.0 \lambda + 310.0} & \frac{32.0}{4.0 \lambda + 310.0}\\0 & 0 & 1 & \frac{57.0}{- 6 \lambda - 465} & \frac{27.0 - 1.0 \lambda}{2.0 \lambda + 155.0} & \frac{2.0 \lambda + 3.0}{2.0 \lambda + 155.0}\end{matrix}\right], \ \left( 0, \ 1, \ 2\right)\right)\)

For the matrix \(A\) to be invertible, we notice that multiple conditions must be satisfied (the denominators must be nonzero): \[ \begin{align*} -6\lambda -465 &\neq0\\ 4 \lambda + 310 &\neq 0\\ 2 \lambda + 155 &\neq 0 \end{align*} \] However, these are really a single condition, because the three expressions are multiples of one another.

Solve for \(\lambda\):

sy.solvers.solve(-6 * lamb - 465, lamb)

\(\displaystyle \left[ - \frac{155}{2}\right]\)

So this is the \(\lambda\) that makes the matrix non-invertible. Let's verify with the determinant: if \(|A| = 0\), the matrix is not invertible, so plug this \(\lambda\) back into \(A\) and calculate the determinant. Don't worry about determinants; we will revisit this topic later.

A = np.array([[3, -155 / 2, 1], [2, -1, 6], [1, 9, 4]])
np.linalg.det(A)

\(\displaystyle 0.0\)

The \(|A|\) is indeed \(0\), confirming that \(A\) is singular at \(\lambda = -\frac{155}{2}\).

So we have found the single condition: as long as \(\lambda \neq -\frac{155}{2}\), the matrix \(A\) is invertible.
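We can also obtain this condition directly by computing the determinant symbolically; it is linear in \(\lambda\), with \(-\frac{155}{2}\) as its only root (a quick sketch):

A_sym = sy.Matrix([[3, lamb, 1], [2, -1, 6], [1, 9, 4]])
A_sym.det()                  # -2*lamda - 155
sy.solve(A_sym.det(), lamb)  # [-155/2]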

Properties of Inverse Matrices

  1. If \(A\) and \(B\) are both invertible, then \((AB)^{-1}=B^{-1}A^{-1}\).
  2. If \(A\) is invertible, then \((A^T)^{-1}=(A^{-1})^T\).
  3. If \(A\) and \(B\) are both invertible and symmetric such that \(AB=BA\), then \(A^{-1}B\) is symmetric.

The first property is straightforward: \[ \begin{align} (AB)(B^{-1}A^{-1})=A(BB^{-1})A^{-1}=AIA^{-1}=AA^{-1}=I \end{align} \] so \(B^{-1}A^{-1}\) is indeed the inverse of \(AB\).

The trick of the second property is to show that \[ A^T(A^{-1})^T = I \] We can use the property of the transpose: \[ A^T(A^{-1})^T=(A^{-1}A)^T = I^T = I \]

The third property requires showing \[ A^{-1}B = (A^{-1}B)^T \] Again, use the property of the transpose together with the symmetry of \(A\) and \(B\): \[ (A^{-1}B)^{T}=B^T(A^{-1})^T=B(A^T)^{-1}=BA^{-1} \] Now use the \(AB = BA\) condition to proceed: \[\begin{align*} AB&=BA\\ A^{-1}ABA^{-1}&=A^{-1}BAA^{-1}\\ BA^{-1}&=A^{-1}B \end{align*}\] Plugging this into the previous equation, we have \[ (A^{-1}B)^{T}=BA^{-1}=A^{-1}B \]
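The first two identities can also be spot-checked exactly with SymPy. A brief sketch using random matrices (a random draw is almost surely invertible; regenerate if .inv() raises an error):

A_r = sy.randMatrix(3)
B_r = sy.randMatrix(3)

(A_r * B_r).inv() == B_r.inv() * A_r.inv()  # property 1: True
(A_r.T).inv() == (A_r.inv()).T              # property 2: True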